Estimation of the 6D pose of rigid objects is a fundamental problem in computer vision. Traditionally, pose estimation has been concerned with determining a single best estimate. However, a single estimate cannot express visual ambiguity, which in many situations is unavoidable due to object symmetries or occlusion of identifying features. The inability to account for pose ambiguity can lead to failure of subsequent methods, which is unacceptable when the cost of failure is high. Estimation of full pose distributions, in contrast to single estimates, is well suited for expressing pose uncertainty. Motivated by this, we propose a novel pose distribution estimation method. An implicit formulation of the probability distribution over object pose is derived from an intermediate representation of the object as a set of keypoints. This ensures that the pose distribution estimates have a high level of interpretability. Furthermore, our method is based on conservative approximations, which lead to reliable estimates. The method has been evaluated on the task of rotation distribution estimation on the YCB-V and T-less datasets and performs reliably on all objects.
Many industrial assembly tasks involve peg-in-hole insertion, for example insertions with tight tolerances, which are challenging even in highly calibrated robot cells. Visual servoing can be employed to increase robustness against uncertainties in the system; however, state-of-the-art methods either rely on accurate 3D models for synthetic rendering or require manual involvement to acquire training data. We propose a novel self-supervised visual servoing method for high-precision peg insertion that is fully automated and does not rely on synthetic data. We demonstrate its applicability for inserting electronic components into a printed circuit board with tight tolerances. We show that peg-in-hole insertion can be accelerated dramatically by preceding a robust but slow force-based insertion strategy with the proposed visual servoing method, whose configuration is fully autonomous.
Pose estimation is the task of determining the 6D position of an object in a scene. Pose estimation aids the abilities and flexibility of robotic setups. However, the system must be configured for the use case to perform adequately. This configuration is time-consuming and limits the usability of pose estimation and thereby of robotic systems. Deep learning is an approach to overcome this configuration process by learning parameters directly from a dataset. However, obtaining this training data can also be very time-consuming. Using synthetic training data avoids this data collection problem, but a configuration of the training procedure is needed to overcome the domain gap problem. Additionally, the pose estimation parameters also need to be configured. This configuration has jokingly been referred to as grad student descent, as parameters are tuned manually until satisfactory results are obtained. This paper presents a method for automatic configuration using only synthetic data. This is achieved by learning domain randomization during network training and then using the learned domain randomization to optimize the pose estimation parameters. The developed method shows state-of-the-art performance of 82.0 % recall on the challenging occlusion dataset, outperforming all previous methods. These results demonstrate the validity of automatic setup of pose estimation using purely synthetic data.
We present an approach to learn dense, continuous 2D-3D correspondence distributions over the surface of objects from data, with no prior knowledge of visual ambiguities such as symmetry. We also present a new method for 6D pose estimation of rigid objects that uses the learnt distributions to sample, score and refine pose hypotheses. The correspondence distributions are learnt with a contrastive loss and are represented in object-specific latent spaces by an encoder-decoder query model and a small fully connected key model. Our method is unsupervised with respect to visual ambiguities, yet we show that the query and key models learn to represent accurate multi-modal surface distributions. Our pose estimation method improves the state of the art significantly on the comprehensive BOP Challenge, trained purely on synthetic data, even compared with methods trained on real data. The project site is at https://surfemb.github.io/.
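To make the query/key mechanism concrete, here is a minimal sketch (with assumed shapes and names, not the authors' code) of how per-pixel query embeddings can be scored against key embeddings of sampled surface points to form a correspondence distribution per pixel:

```python
import numpy as np

def correspondence_distribution(queries, keys):
    """Score dense 2D-3D correspondences.

    queries: (H*W, D) per-pixel query embeddings from an encoder-decoder.
    keys:    (M, D)   key embeddings of M sampled 3D surface points from
                      a small fully connected key model.
    Returns  (H*W, M): a per-pixel probability distribution over surface
    points, i.e. a softmax over dot-product similarities.
    """
    logits = queries @ keys.T                     # (H*W, M) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

# Toy usage: an 8x8 image, a 64-dim latent space, 500 surface samples.
rng = np.random.default_rng(0)
dist = correspondence_distribution(rng.normal(size=(64, 64)),
                                   rng.normal(size=(500, 64)))
assert np.allclose(dist.sum(axis=1), 1.0)
```

Pose hypotheses can then be sampled and scored according to how well projected surface points agree with high-probability pixels under these distributions.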
The distributed representation of symbols is one of the key technologies in machine learning systems today, playing a pivotal role in modern natural language processing. Traditional word embeddings associate a separate vector with each word. While this approach is simple and leads to good performance, it requires a lot of memory for representing a large vocabulary. To reduce the memory footprint, the default embedding layer in spaCy is a hash embeddings layer. It is a stochastic approximation of traditional embeddings that provides unique vectors for a large number of words without explicitly storing a separate vector for each of them. To be able to compute meaningful representations for both known and unknown words, hash embeddings represent each word as a summary of the normalized word form, subword information and word shape. Together, these features produce a multi-embedding of a word. In this technical report we first lay out a bit of history and introduce the embedding methods in spaCy in detail. Second, we critically evaluate the hash embedding architecture with multi-embeddings on Named Entity Recognition datasets from a variety of domains and languages. The experiments validate most key design choices behind spaCy's embedders, but we also uncover a few surprising results.
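As a rough illustration of the idea, here is a simplified sketch, not spaCy's actual implementation: the table size, the use of Python's hash() and the plain summation of features are all assumptions (spaCy combines the hashed feature embeddings differently). It shows how several views of a word can each index a fixed-size table so that any word, known or unknown, receives a vector:

```python
import numpy as np

TABLE_ROWS, DIM = 5000, 96   # assumed sizes, far smaller than a full vocabulary

rng = np.random.default_rng(0)
table = rng.normal(scale=0.1, size=(TABLE_ROWS, DIM))

def row(feature, seed):
    # hash() stands in for a proper seeded hash such as MurmurHash.
    return hash((seed, feature)) % TABLE_ROWS

def multi_embedding(word):
    """Combine hashed embeddings of several views of the word:
    normalized form, prefix/suffix subword information and word shape."""
    shape = "".join("X" if c.isupper() else "x" if c.isalpha()
                    else "d" if c.isdigit() else c for c in word)
    feats = [word.lower(), word[:3], word[-3:], shape]
    return sum(table[row(f, i)] for i, f in enumerate(feats))

v = multi_embedding("spaCy")
print(v.shape)                # (96,) -- every word gets a vector, no OOV issue
```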
Recent successes of massively overparameterized models have inspired a new line of work investigating the underlying conditions that enable overparameterized models to generalize well. This paper considers a framework where the possibly overparameterized model includes fake features, i.e., features that are present in the model but not in the data. We present a non-asymptotic high-probability bound on the generalization error of the ridge regression problem under the model misspecification of having fake features. Our high-probability results characterize the interplay between the implicit regularization provided by the fake features and the explicit regularization provided by the ridge parameter. We observe that fake features may improve the generalization error, even though they are irrelevant to the data.
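A quick numerical illustration of the setup (a sketch with arbitrary dimensions, noise level and ridge parameter, chosen for illustration only): the response depends only on the real features, while additional "fake" columns are present in the model but carry no signal:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, n_fake, lam = 50, 20, 200, 1e-2     # assumed sizes and ridge parameter

beta = rng.normal(size=p)                 # true coefficients: real features only
X = rng.normal(size=(n, p))
y = X @ beta + 0.5 * rng.normal(size=n)

def ridge(A, y, lam):
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

# Model misspecification: append fake features that are independent of y.
A = np.hstack([X, rng.normal(size=(n, n_fake))])
w_real, w_fake = ridge(X, y, lam), ridge(A, y, lam)

Xt = rng.normal(size=(5000, p))           # fresh test data from the true model
yt = Xt @ beta
At = np.hstack([Xt, rng.normal(size=(5000, n_fake))])
print("test MSE, real features only:", np.mean((Xt @ w_real - yt) ** 2))
print("test MSE, with fake features:", np.mean((At @ w_fake - yt) ** 2))
```

The fake features act as an extra, implicit source of regularization that interacts with the explicit ridge penalty lam, which is the interplay the bound characterizes.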
We focus on the continual learning problem where the tasks arrive sequentially and the aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks. In contrast to the continual learning literature focusing on the centralized setting, we investigate the distributed estimation framework. We consider the well-established distributed learning algorithm CoCoA. We derive closed-form expressions for the iterations for the overparametrized case. We illustrate the convergence and the error performance of the algorithm based on the over/under-parametrization of the problem. Our results show that depending on the problem dimensions and data generation assumptions, CoCoA can perform continual learning over a sequence of tasks, i.e., it can learn a new task without forgetting previously learned tasks, with access only to one task at a time.
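The no-forgetting behaviour in the overparametrized regime can be illustrated with a minimal centralized sketch (this is not the CoCoA iterations; the projection-based update below is a simplified stand-in for the phenomenon the paper analyzes): each new task is fitted with a minimum-norm correction restricted to the null space of previously seen data, so earlier tasks remain fitted exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 100                                  # parameter dimension; d >> samples/task

def learn_task(w, seen, X, y):
    """Fit task (X, y) with the minimum-norm correction restricted to the
    null space of all previously seen data, so earlier fits are preserved."""
    P = np.eye(d)
    if seen is not None:
        P -= np.linalg.pinv(seen) @ seen        # null-space projector
    u = np.linalg.pinv(X @ P) @ (y - X @ w)     # min-norm fit of the residual
    seen = X if seen is None else np.vstack([seen, X])
    return w + P @ u, seen

tasks = []
for _ in range(3):                              # three tasks, 10 samples each
    X = rng.normal(size=(10, d))
    tasks.append((X, X @ rng.normal(size=d)))

w, seen = np.zeros(d), None
for X, y in tasks:                              # access one task at a time
    w, seen = learn_task(w, seen, X, y)
for t, (X, y) in enumerate(tasks):
    print(f"task {t} residual: {np.linalg.norm(X @ w - y):.2e}")  # all near 0
```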
We introduce an unsupervised learning approach that combines the truncated singular value decomposition with convex clustering to estimate within-cluster directions of maximum variance/covariance (in the variables) while simultaneously hierarchically clustering (on observations). In contrast to previous work on joint clustering and embedding, our approach has a straightforward formulation, is readily scalable via distributed optimization, and admits a direct interpretation as hierarchically clustered principal component analysis (PCA) or hierarchically clustered canonical correlation analysis (CCA). Through numerical experiments and real-world examples relevant to precision medicine, we show that our approach outperforms traditional and contemporary clustering methods on underdetermined problems ($p \gg N$ with tens of observations) and scales to large datasets (e.g., $N=100,000$; $p=1,000$) while yielding interpretable dendrograms of hierarchical per-cluster principal components or canonical variates.
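A schematic sketch of the pipeline on toy data (Ward linkage and the sizes below are stand-ins chosen for illustration; the paper uses convex clustering, which yields the full hierarchical path): cluster the observations hierarchically, then compute a truncated SVD within each cluster to obtain per-cluster principal components.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
N, p, k = 30, 500, 2                    # underdetermined: p >> N; k components
X = np.vstack([rng.normal(loc=m, size=(N // 3, p)) for m in (-2, 0, 2)])

Z = linkage(X, method="ward")           # hierarchical clustering on observations
labels = fcluster(Z, t=3, criterion="maxclust")

for c in np.unique(labels):
    Xc = X[labels == c]
    Xc = Xc - Xc.mean(axis=0)           # center within the cluster
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                 # (k, p) within-cluster principal axes
    evr = (s[:k] ** 2) / (s ** 2).sum() # explained variance ratio
    print(f"cluster {c}: n={len(Xc)}, explained variance {evr.round(2)}")
```

The dendrogram from the linkage step provides the hierarchy, and each node can be annotated with its own principal directions, giving the interpretable per-cluster components the abstract describes.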
Using 3D CNNs on high resolution medical volumes is very computationally demanding, especially for large datasets like the UK Biobank which aims to scan 100,000 subjects. Here we demonstrate that using 2D CNNs on a few 2D projections (representing mean and standard deviation across axial, sagittal and coronal slices) of the 3D volumes leads to reasonable test accuracy when predicting the age from brain volumes. Using our approach, one training epoch with 20,324 subjects takes 40 - 70 seconds using a single GPU, which is almost 100 times faster compared to a small 3D CNN. These results are important for researchers who do not have access to expensive GPU hardware for 3D CNNs.
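A minimal sketch of the projection step (the volume shape below is an assumption; axis order depends on the data): each 3D volume is collapsed into six 2D maps, the mean and standard deviation along each of the three axes, which a standard 2D CNN can then consume.

```python
import numpy as np

def projections_2d(volume):
    """Collapse a 3D volume (D, H, W) into six 2D maps: the mean and the
    standard deviation across each of the three axes. The six maps can be
    fed to an ordinary 2D CNN instead of running a 3D CNN on the volume."""
    maps = []
    for axis in range(3):               # one anatomical plane per axis
        maps.append(volume.mean(axis=axis))
        maps.append(volume.std(axis=axis))
    return maps

vol = np.random.default_rng(4).random((182, 218, 182))  # assumed MNI-like shape
print([m.shape for m in projections_2d(vol)])           # two 2D maps per plane
```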
Large annotated datasets are required to train segmentation networks. In medical imaging, it is often difficult, time consuming and expensive to create such datasets, and it may also be difficult to share these datasets with other researchers. Different AI models can today generate very realistic synthetic images, which can potentially be openly shared as they do not belong to specific persons. However, recent work has shown that using synthetic images for training deep networks often leads to worse performance compared to using real images. Here we demonstrate that using synthetic images and annotations from an ensemble of 10 GANs, instead of from a single GAN, increases the Dice score on real test images by 4.7 % to 14.0 % on specific classes.
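A sketch of the sampling strategy (the generator interface, latent dimension and toy stand-ins below are assumptions): instead of drawing all synthetic image/annotation pairs from a single GAN, draw an equal share from each member of the ensemble.

```python
import numpy as np

def synthetic_training_set(generators, n_total, latent_dim=128, seed=5):
    """Draw an equal share of synthetic (image, mask) pairs from each
    generator in the ensemble rather than all pairs from one GAN."""
    rng = np.random.default_rng(seed)
    images, masks = [], []
    for g in generators:
        for _ in range(n_total // len(generators)):
            img, mask = g(rng.normal(size=latent_dim))
            images.append(img)
            masks.append(mask)
    return np.stack(images), np.stack(masks)

def toy_gan(z):
    """Toy stand-in for a trained GAN: returns a random image and mask."""
    r = np.random.default_rng(abs(int(z[0] * 1e6)) % (2**32))
    return r.random((64, 64)), (r.random((64, 64)) > 0.5).astype(np.uint8)

imgs, msks = synthetic_training_set([toy_gan] * 10, n_total=100)
print(imgs.shape, msks.shape)   # (100, 64, 64) (100, 64, 64)
```

Pooling samples across generators increases the diversity of the synthetic training set, which is the intuition behind the reported Dice improvement over a single GAN.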